
    An Intelligent Context-Aware Biometrics System Based on Agent Technology

    Traditional biometric systems deal with each instance of identification in the same way, irrespective of the circumstances in which the biometric samples were captured at different times or for different applications. Our main objective is to enhance the traditional biometric identification process and improve its decision making by giving biometric systems an intelligent and flexible identification mechanism based on agent technology. Our aim is to develop a multiagent-based framework that represents a context-aware adaptive biometric system with multiple modalities.

    Context-Aware Adaptive Biometrics System using Multiagents

    Traditional biometric systems are designed and configured to operate in predefined circumstances to address the needs of a particular application. The performance of such biometric systems tends to decrease when they encounter varying conditions, as they are unable to adapt to such variations. Many real-life scenarios require identification systems to recognise uncooperative people in uncontrolled environments. Therefore, there is a real need to design biometric systems that are aware of their context and able to adapt to changing conditions. The context-awareness and adaptation of a biometric system are based on a set of factors that include: the application (e.g. healthcare system, border control, unlocking smart devices), the environment (e.g. quiet/noisy, indoor/outdoor), desired and pre-defined requirements (e.g. speed, usability, reliability, accuracy, robustness to high/low quality samples), the user of the system (e.g. cooperative or non-cooperative), the chosen modality (e.g. face, speech, gesture signature), and the techniques used (e.g. pre-processing to normalise and clean biometric data, feature extraction and classification). These factors are linked and might affect each other; hence the system has to work adaptively to meet its overall aim according to its operational context. The aim of this research is to develop a multiagent-based framework to represent a context-aware adaptive biometric system, and thereby improve the decision making process at each processing step of traditional biometric identification systems. Agents will be used to provide the system with intelligence, adaptation, flexibility, automation, and reliability during the identification process. The framework will accommodate at least five agents, one for each of the five main processing steps of a typical biometric system (i.e. data capture, pre-processing, feature extraction, classification and decision). Each agent can contribute differently towards its designated goal to achieve the best possible solution by selecting/applying the best technique. For example, an agent can be used to assess the quality of the input biometric sample to ensure the important features can be extracted and processed in further steps. Another agent can be used to pre-process the biometric sample if necessary. A third agent is used to select the appropriate set of features, followed by another to select a suitable classifier that works well in a given condition.
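
    A minimal sketch of how such a five-agent pipeline could be organised is given below. The class names, context attributes and technique-selection policy are illustrative assumptions, not the framework's actual design.

```python
# Illustrative sketch (assumed names, not the proposed framework): five agents,
# one per processing step, each choosing a technique based on the operational context.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class Context:
    """Operational context the agents reason over."""
    application: str = "border_control"     # e.g. healthcare, border control, device unlock
    environment: str = "outdoor"            # e.g. quiet/noisy, indoor/outdoor
    user_cooperative: bool = False
    sample_quality: float = 0.0             # filled in after quality assessment


class Agent:
    """Owns one processing step and a pool of candidate techniques."""

    def __init__(self, name: str, techniques: Dict[str, Callable[[Any, Context], Any]]):
        self.name = name
        self.techniques = techniques

    def select(self, context: Context) -> str:
        # Placeholder policy: a real agent would use rules, utility scores or a
        # learned policy over the context to pick the most suitable technique.
        return next(iter(self.techniques))

    def run(self, data: Any, context: Context) -> Any:
        return self.techniques[self.select(context)](data, context)


def run_pipeline(sample: Any, context: Context, agents: List[Agent]) -> Any:
    """Data capture -> pre-processing -> feature extraction -> classification -> decision."""
    data = sample
    for agent in agents:
        data = agent.run(data, context)
    return data
```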

    Image quality-based adaptive illumination normalisation for face recognition

    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions between the enrolment and identification stages contribute significantly to these intra-class variations. A common approach to addressing the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images can lead to a decrease in recognition accuracy. This paper presents a dynamic approach to illumination normalisation based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing the image against a known reference face image. Histogram equalisation is applied to a probe image only if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach, where every image is normalised irrespective of the lighting conditions under which it was acquired.
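
    The decision rule described above can be sketched as follows; the luminance-distortion measure (derived from the luminance term of the universal image quality index) and the threshold value are assumptions made for illustration, not the paper's exact formulation.

```python
# Sketch of the adaptive rule: equalise a probe face only when its luminance
# distortion against a reference image exceeds a threshold. The distortion
# measure (one minus the luminance term of the universal image quality index)
# and the threshold of 0.1 are assumptions, not the paper's exact values.
import cv2
import numpy as np


def luminance_distortion(probe: np.ndarray, reference: np.ndarray) -> float:
    """0 when mean luminances match; grows as the probe's lighting deviates."""
    mu_p, mu_r = float(probe.mean()), float(reference.mean())
    return 1.0 - 2.0 * mu_p * mu_r / (mu_p ** 2 + mu_r ** 2 + 1e-12)


def adaptive_normalise(probe: np.ndarray, reference: np.ndarray,
                       threshold: float = 0.1) -> np.ndarray:
    """probe/reference: 8-bit grayscale face images."""
    if luminance_distortion(probe, reference) > threshold:
        return cv2.equalizeHist(probe)   # poorly lit: normalise
    return probe                         # well lit: leave untouched
```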

    Image-Quality-Based Adaptive Face Recognition

    The accuracy of automated face recognition systems is greatly affected by intraclass variations between the enrollment and identification stages. In particular, changes in lighting conditions are a major contributor to these variations. Common approaches to addressing the effects of varying lighting conditions include preprocessing face images to normalize intraclass variations and using illumination-invariant face descriptors. Histogram equalization is a widely used technique in face recognition to normalize variations in illumination. However, normalizing well-lit face images can lead to a decrease in recognition accuracy. The multiresolution property of wavelet transforms is used in face recognition to extract facial feature descriptors at different scales and frequencies. The high-frequency wavelet subbands have been shown to provide illumination-invariant face descriptors, whereas the approximation wavelet subbands have been shown to be a better feature representation for well-lit face images. Fusion of match scores from low- and high-frequency-based face representations has been shown to improve recognition accuracy under varying lighting conditions. However, the selection of fusion parameters for different lighting conditions remains unsolved. Motivated by these observations, this paper presents adaptive approaches to face recognition that overcome the adverse effects of varying lighting conditions. Image quality, measured in terms of luminance distortion relative to a known reference image, is used as the basis for adaptively applying global and region-based illumination normalization procedures. Image quality is also used to adaptively select fusion parameters for wavelet-based multistream face recognition.
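
    A compact sketch of the quality-adaptive fusion idea is given below; the linear weighting rule and the use of a single quality value are assumptions chosen to illustrate the principle, not the paper's exact scheme.

```python
# Quality-adaptive score-level fusion of two wavelet streams: an approximation
# (low-frequency) matcher and a detail (high-frequency) matcher. The linear
# weighting by image quality is an assumed illustration of the adaptive idea.
def adaptive_fuse(score_low: float, score_high: float, quality: float) -> float:
    """quality in [0, 1]: luminance quality of the probe vs. a reference image."""
    w_low = quality            # well-lit probes: trust the approximation subband
    w_high = 1.0 - quality     # poorly lit probes: trust the detail subbands
    return w_low * score_low + w_high * score_high
```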

    Analysis of smartphone model identification using digital images

    This paper focuses on smartphone model identification using image features. A total of 64 image features, broadly categorised into colour features, wavelet features and image quality features, are extracted from high-resolution smartphone images. A binary support vector machine (SVM), extended to the multiclass case, is used as the classifier. Experimental results based on 1800 images captured with 10 different smartphone/tablet devices are promising in correctly identifying the source smartphone model. Image quality metrics and wavelet features are shown to carry the most useful device/model information compared to colour features. However, compared to colour features, quality and wavelet features are highly sensitive to simple image modifications. The combined set of colour, quality and wavelet features achieves the best overall identification accuracy.
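
    Under the stated setup, the classification stage might look like the sketch below, where X holds the 64 features per image and y the device-model labels; scikit-learn's SVC extends a binary SVM to the multiclass case via one-vs-one voting, and the kernel and parameters shown are assumptions.

```python
# Assumed sketch of the classification stage: X is an (n_images, 64) feature
# matrix (colour + wavelet + image-quality features), y the device-model labels.
# scikit-learn's SVC handles multiclass problems by one-vs-one voting over
# binary SVMs; kernel and hyperparameters here are illustrative choices.
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def train_model_classifier(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)   # fitted classifier, hold-out accuracy
```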

    Illumination and Expression Invariant Face Recognition: Toward Sample Quality-based Adaptive Fusion

    The performance of face recognition schemes is adversely affected by significant to moderate variations in illumination, pose, and facial expressions. Most existing approaches to face recognition tend to deal with one of these problems by controlling the other conditions. Besides strong efficiency requirements, face recognition systems on constrained mobile devices and PDAs are expected to be robust against all variations in recording conditions that arise naturally from the way such devices are used. Wavelet-based face recognition schemes have been shown to meet the efficiency requirements well. Wavelet transforms decompose face images into different frequency subbands at different scales, each giving rise to a different representation of the face and thereby providing the ingredients for a multi-stream approach to face recognition that stands a real chance of achieving an acceptable level of robustness. This paper is concerned with the best fusion strategy for a multi-stream face recognition scheme. By investigating the robustness of different wavelet subbands against variations in lighting conditions and expressions, we demonstrate the shortcomings of current non-adaptive fusion strategies and argue for the need to develop an image quality-based, intelligent, dynamic fusion strategy.
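
    The multi-stream idea can be sketched as below: a single-level 2-D wavelet transform splits a face image into an approximation subband and three detail subbands, each feeding its own matcher, with the fixed weights at the end illustrating the non-adaptive fusion baseline the paper argues against. The wavelet family and weight values are assumptions.

```python
# Sketch of a multi-stream wavelet representation and a fixed-weight fusion
# baseline (the non-adaptive strategy whose shortcomings the paper demonstrates).
# Wavelet family and fusion weights are assumptions for illustration.
import numpy as np
import pywt


def wavelet_streams(face: np.ndarray, wavelet: str = "haar") -> dict:
    """Single-level 2-D DWT: approximation plus horizontal/vertical/diagonal details."""
    approx, (horiz, vert, diag) = pywt.dwt2(face.astype(float), wavelet)
    return {"approx": approx, "horiz": horiz, "vert": vert, "diag": diag}


def fuse_fixed(scores: dict) -> float:
    """Non-adaptive fusion: the same weights regardless of lighting or expression."""
    weights = {"approx": 0.4, "horiz": 0.2, "vert": 0.2, "diag": 0.2}
    return sum(weights[k] * scores[k] for k in scores)
```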

    On the Discrimination Power of Dynamic Features for Online Signature

    The mobile market has taken a huge leap in the last two decades, re-defining the rules of communication, networking, socialising and transactions among individuals and organisations. Authentication based on verification of signatures on mobile devices is slowly gaining popularity. Most online signature verification algorithms focus on computing the global Equal Error Rate (EER) across all users of a dataset. In this work, contrary to such a representation, it is shown that there are user-specific differences in EER values, both for the combined features and for each individual feature. The experiments to test this hypothesis are carried out on two publicly available datasets using the dynamic time warping algorithm. From the experiments, it is observed that for the MCYT-100 dataset, which yields an overall EER of 0.08, the range of user-specific EER is between 0 and 0.27.
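
    A minimal version of the matcher assumed here, a plain dynamic time warping distance between two sequences of per-point dynamic features, is sketched below; the feature layout and the Euclidean local cost are assumptions rather than the paper's exact configuration.

```python
# Plain dynamic time warping distance between two online signatures, each given
# as an (n, d) array of per-point dynamic features (e.g. x, y, pressure).
# A per-user decision threshold on this distance then yields user-specific EERs.
import numpy as np


def dtw_distance(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    n, m = len(sig_a), len(sig_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(sig_a[i - 1] - sig_b[j - 1])   # Euclidean local cost
            cost[i, j] = d + min(cost[i - 1, j],               # insertion
                                 cost[i, j - 1],               # deletion
                                 cost[i - 1, j - 1])           # match
    return float(cost[n, m])
```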

    Android Pattern Unlock Authentication - effectiveness of local and global dynamic features

    This study conducts a holistic analysis of the performance of biometric features incorporated into Pattern Unlock authentication. The objective is to strengthen the authentication by adding an implicit layer. Earlier studies have incorporated either global or local dynamic features for verification; however, as found in this paper, different features have variable discriminating power, especially at different extraction levels. The discriminating potential of global features, local features and their combination is evaluated. Results show that locally extracted features have higher discriminating power than global features, and that combining both gives the best verification performance. Further, a novel feature is proposed and evaluated, which is found to have a varied impact (both positive and negative) on system performance. Our findings show that it is essential to evaluate features, independently and collectively, extracted at different levels (global and local) and in different combinations, as some might impede the verification performance of the system.
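
    The distinction between global and local extraction levels can be illustrated as below, where a swipe is recorded as (timestamp, x, y) touch samples; the specific features shown are assumptions, not the paper's exact feature set.

```python
# Illustrative global vs. local dynamic features for a pattern unlock gesture.
# "Global" summarises the whole swipe; "local" repeats the summary per segment
# between consecutive pattern nodes. Feature choices are assumptions.
import numpy as np


def global_features(trace: np.ndarray) -> dict:
    """trace: (n, 3) array of (t, x, y) touch samples over the full gesture."""
    t, xy = trace[:, 0], trace[:, 1:]
    path_len = float(np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1)))
    duration = float(t[-1] - t[0])
    return {"duration": duration,
            "path_length": path_len,
            "mean_speed": path_len / max(duration, 1e-9)}


def local_features(segments: list) -> list:
    """segments: list of (n_i, 3) arrays, one per stroke between pattern nodes."""
    return [global_features(seg) for seg in segments]
```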